Task assignment is a key issue in mobile crowdsensing (MCS). Previous task assignment methods are mainly static, offline assignments, whereas in practice the MCS platform must handle dynamically arriving workers and tasks online. A reliable dynamic assignment strategy is therefore crucial to the platform's efficiency. This paper proposes an MCS dynamic task assignment framework to solve the task-maximization assignment problem with spatiotemporal properties. First, a single worker is modeled as a Markov decision process, and a deep reinforcement learning algorithm (DDQN) is used to learn offline from historical task data. Then, during the dynamic assignment process, we account for the impact of current decisions on future decisions: a maximum flow model maximizes the number of tasks completed in each period while also maximizing the total expected Q value of all workers, so as to achieve an optimal global assignment. Experiments show that the strategy proposed in this paper performs well compared with the baseline strategies under different conditions.
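To make the per-period assignment step concrete, below is a minimal sketch of how a maximum-flow formulation can maximize the number of completed tasks while preferring assignments with high expected Q values. It is not the paper's exact construction: the callables `q_value(w, t)` (a worker's DDQN-estimated expected Q value for a task) and `feasible(w, t)` (a spatiotemporal reachability check) are hypothetical stand-ins for the learned model and the constraints described in the paper.

```python
import networkx as nx


def assign_period(workers, tasks, q_value, feasible):
    """One-period worker-task assignment via min-cost maximum flow.

    The max-flow value gives the largest number of tasks that can be
    completed this period; using cost = -Q on worker->task edges makes
    the min-cost solution (among all maximum flows) the one with the
    highest total expected Q value.
    """
    SCALE = 1000  # min-cost flow in networkx expects integral weights
    G = nx.DiGraph()
    for w in workers:
        G.add_edge("src", ("w", w), capacity=1, weight=0)
    for t in tasks:
        G.add_edge(("t", t), "sink", capacity=1, weight=0)
    for w in workers:
        for t in tasks:
            if feasible(w, t):
                # Negative cost so higher-Q assignments are preferred.
                G.add_edge(("w", w), ("t", t), capacity=1,
                           weight=-int(q_value(w, t) * SCALE))

    flow = nx.max_flow_min_cost(G, "src", "sink")
    return [(w, t)
            for w in workers for t in tasks
            if flow.get(("w", w), {}).get(("t", t), 0) == 1]
```

The Q values are scaled to integers because networkx's min-cost flow routines are not reliable with floating-point weights; the graph is acyclic, so the negative edge costs cause no negative-cycle issues.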